The success of deep learning has sparked interest in whether the brain learns hierarchical representations using gradient-based learning. However, current biologically plausible methods for gradient-based credit assignment in deep neural networks require infinitesimally small feedback signals, which is problematic in biologically realistic noisy environments and at odds with experimental evidence in neuroscience showing that top-down feedback can significantly influence neural activity. Building on deep feedback control (DFC), a recently proposed credit-assignment method, we combine strong feedback influences on neural activity with gradient-based learning and show that this naturally leads to a novel view of neural network optimization. Instead of gradually changing the network weights towards configurations with low output loss, weight updates gradually minimize the amount of feedback required from a controller that drives the network to the supervised output label. Moreover, we show that the use of strong feedback in DFC allows learning forward and feedback connections simultaneously, using learning rules that are fully local in space and time. We complement our theoretical results with experiments on standard computer-vision benchmarks, showing competitive performance relative to backpropagation as well as robustness to noise. Overall, our work presents a fundamentally novel view of learning as control minimization, while sidestepping biologically implausible assumptions.
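Since the abstract states the control-minimization idea only verbally, a toy numerical sketch may help make it concrete. Everything below (the two-layer tanh network, the purely integral controller acting on the output layer only, and the outer-product weight update) is an illustrative assumption, not the authors' DFC implementation.

```python
# Toy sketch of "learning as control minimization": a controller drives the output
# to the label, and weight updates shrink the feedback the controller must supply.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.5, size=(4, 3))        # hidden weights (assumed sizes)
W2 = rng.normal(scale=0.5, size=(2, 4))        # output weights
x = rng.normal(size=3)                         # one training input
y_star = np.array([1.0, -1.0])                 # supervised output label

eta, k_i, dt = 0.05, 1.0, 0.1                  # learning rate, controller gain, step size

for epoch in range(300):
    u = np.zeros(2)                            # feedback provided by the controller
    for _ in range(200):                       # let the controlled dynamics settle
        h = np.tanh(W1 @ x)
        y = W2 @ h + u                         # strong feedback directly shifts the output
        u = u + dt * k_i * (y_star - y)        # integral controller pushes y towards y_star
    # Weight updates reduce the feedback the controller has to provide,
    # rather than directly descending an output loss.
    W2 += eta * np.outer(u, h)
    W1 += eta * np.outer((W2.T @ u) * (1 - h**2), x)

print("remaining feedback magnitude:", np.linalg.norm(u))
```

As the weights absorb the target, the settled feedback signal shrinks towards zero, which is the sense in which learning minimizes control in this simplified picture.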
While neural networks are powerful function approximators, the underlying modelling assumptions ultimately define the hypothesis class that they parameterize. In classification, these assumptions are minimal, as the commonly employed softmax is able to represent any categorical distribution. In regression, however, restrictive assumptions are typically placed on the type of continuous distribution to be realized, such as the dominant choice of training via mean-squared error and its underlying Gaussianity assumption. Recent modelling advances allow the type of continuous distribution to remain unspecified, granting regression the flexibility of classification models. While past studies stress the benefits of such flexible regression models in terms of performance, here we study the effect of the model choice on uncertainty estimation. We highlight that under model misspecification, aleatoric uncertainty is not properly captured, and that a Bayesian treatment of a misspecified model leads to unreliable epistemic uncertainty estimates. Overall, our study outlines how the modelling choice in regression may affect uncertainty estimation and thereby any downstream decision-making process.
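To make the modelling assumption concrete, here is a small sketch (not from the paper) contrasting mean-squared-error training, which implicitly assumes a fixed-variance Gaussian, with an explicit heteroscedastic Gaussian likelihood whose predicted variance is what is usually read off as aleatoric uncertainty. The network, data, and hyperparameters are placeholders.

```python
import torch
import torch.nn as nn

class GaussianHead(nn.Module):
    """Tiny regressor predicting a mean and a log-variance per input (illustrative)."""
    def __init__(self, d_in=1, d_hidden=32):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(d_in, d_hidden), nn.Tanh())
        self.mu = nn.Linear(d_hidden, 1)
        self.log_var = nn.Linear(d_hidden, 1)

    def forward(self, x):
        h = self.body(x)
        return self.mu(h), self.log_var(h)

def gaussian_nll(mu, log_var, y):
    # -log N(y | mu, exp(log_var)); plain MSE corresponds to fixing log_var to a constant.
    return 0.5 * (log_var + (y - mu) ** 2 / log_var.exp()).mean()

# Heteroscedastic toy data: noise grows with |x|, which a fixed-variance model cannot express.
x = torch.linspace(-2, 2, 512).unsqueeze(1)
y = torch.sin(3 * x) + torch.randn_like(x) * (0.05 + 0.3 * x.abs())

model = GaussianHead()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(2000):
    mu, log_var = model(x)
    loss = gaussian_nll(mu, log_var, y)
    opt.zero_grad(); loss.backward(); opt.step()

mu, log_var = model(x)
print("aleatoric std near x=0 and x=2:",
      log_var[256].exp().sqrt().item(), log_var[-1].exp().sqrt().item())
```

If the likelihood family is misspecified for the data at hand, the predicted variance no longer reflects the true noise, which is the failure mode the abstract points to.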
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
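As a concrete illustration of two of the surveyed practices, k-fold cross-validation on the training set and ensembling of the resulting fold models, the following sketch uses a placeholder classifier and synthetic data; it is not taken from any challenge submission.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, y_train, X_test, y_test = X[:400], y[:400], X[400:], y[400:]

fold_models, fold_scores = [], []
for train_idx, val_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X_train):
    model = RandomForestClassifier(random_state=0).fit(X_train[train_idx], y_train[train_idx])
    fold_scores.append(model.score(X_train[val_idx], y_train[val_idx]))
    fold_models.append(model)

# Ensemble the five fold models by averaging their predicted class probabilities.
probs = np.mean([m.predict_proba(X_test) for m in fold_models], axis=0)
acc = (probs.argmax(axis=1) == y_test).mean()
print(f"mean fold val accuracy: {np.mean(fold_scores):.3f}, ensembled test accuracy: {acc:.3f}")
```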
This paper proposes a new algorithm for an automatic variable selection procedure in High Dimensional Graphical Models. The algorithm selects the relevant variables for the node of interest on the basis of mutual information. Several contributions in the literature have investigated the use of mutual information in selecting the appropriate number of relevant features in a large data-set, but most of them have focused on binary outcomes or required high computational effort. The algorithm proposed here overcomes these drawbacks as it is an extension of Chow and Liu's algorithm. Once the probabilistic structure of a High Dimensional Graphical Model is determined via the said algorithm, the best path-step, including the variables with the most explanatory/predictive power for a variable of interest, is determined via the computation of the entropy coefficient of determination. The latter, being based on the notion of (symmetric) Kullback-Leibler divergence, turns out to be closely connected to the mutual information of the involved variables. The application of the algorithm to a wide range of real-world and publicly available data-sets has highlighted its potential and greater effectiveness compared to alternative extant methods.
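The following sketch illustrates the Chow-Liu step that the proposed procedure extends: pairwise mutual information is estimated from (discretized) data and a maximum spanning tree over it is retained. The data generation, the discretization, and the MI estimator below are illustrative choices, not the paper's algorithm.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(0)
n, p = 2000, 5
Z = rng.normal(size=(n, p)).cumsum(axis=1)              # columns with chained dependence
X = np.digitize(Z, np.quantile(Z, [0.25, 0.5, 0.75]))   # crude discretization into 4 bins

# Pairwise mutual information between all variables.
mi = np.zeros((p, p))
for i in range(p):
    for j in range(i + 1, p):
        mi[i, j] = mi[j, i] = mutual_info_score(X[:, i], X[:, j])

# Chow-Liu tree = maximum spanning tree over mutual information,
# implemented here as a minimum spanning tree over the negated MI matrix.
tree = minimum_spanning_tree(-mi).toarray()
edges = [(int(i), int(j), round(float(mi[i, j]), 3)) for i, j in zip(*np.nonzero(tree))]
print("selected edges (i, j, MI):", edges)
```

The retained edges connect each variable to its most informative neighbours, which is the structure the entropy coefficient of determination is then computed along.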
Recent neuroimaging studies dedicated to predicting brain disorders with modern machine learning approaches typically include a single modality and rely on supervised, over-parameterized models. However, a single modality provides only a limited view of the highly complex brain. Critically, supervised models in clinical settings lack accurate diagnostic labels for training. Coarse labels do not capture the long-tailed spectrum of brain disorder phenotypes, which leads to a loss of model generalizability that makes them less useful in diagnostic settings. This work presents a novel multi-scale coordinated framework for learning multiple representations from multimodal neuroimaging data. We propose a general taxonomy of inductive biases to capture unique and joint information in multimodal self-supervised fusion. The taxonomy forms a family of decoder-free models with reduced computational complexity that capture multi-scale relationships between local and global representations of the multimodal inputs. We comprehensively evaluate the taxonomy using functional and structural magnetic resonance imaging (MRI) data across a spectrum of Alzheimer's disease phenotypes, and show that self-supervised models reveal disease-related brain regions and multimodal links without access to the labels during pre-training. The proposed multimodal self-supervised learning yields representations with improved classification performance for both modalities. The accompanying rich and flexible unsupervised deep learning framework captures complex multimodal relationships and provides predictive performance that meets or exceeds that of a narrower, supervised classification analysis. We provide thorough quantitative evidence of how this framework can significantly advance our search for missing links in complex brain disorders.
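As a minimal illustration of decoder-free multimodal fusion, the sketch below aligns two hypothetical modality encoders with a symmetric contrastive objective; the architectures, feature sizes, and loss are generic stand-ins rather than the models in the proposed taxonomy.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Small per-modality encoder producing L2-normalized embeddings (illustrative)."""
    def __init__(self, d_in, d_out=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_in, 128), nn.ReLU(), nn.Linear(128, d_out))
    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)

enc_fmri, enc_smri = Encoder(d_in=200), Encoder(d_in=150)   # hypothetical feature sizes
opt = torch.optim.Adam(list(enc_fmri.parameters()) + list(enc_smri.parameters()), lr=1e-3)

def contrastive_step(x_fmri, x_smri, temperature=0.1):
    z1, z2 = enc_fmri(x_fmri), enc_smri(x_smri)
    logits = z1 @ z2.t() / temperature            # similarity between all subject pairs
    labels = torch.arange(z1.shape[0])            # matching modalities share a subject
    loss = 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# One step on random stand-in data (a batch of 32 "subjects").
print(contrastive_step(torch.randn(32, 200), torch.randn(32, 150)))
```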
Purpose: To substantially shorten the acquisition time required for quantitative 3D chemical exchange saturation transfer (CEST) and semisolid magnetization transfer (MT) imaging and allow for rapid chemical exchange parameter map reconstruction. Methods: Three-dimensional CEST and MT magnetic resonance fingerprinting (MRF) datasets of L-arginine phantoms and of the whole brain and calf muscle of healthy volunteers, cancer patients, and cardiac patients were acquired with 3T clinical scanners at 3 different sites, using 3 different scanner models and coils. A generative adversarial network supervised framework (GAN-CEST) was then designed and trained to learn the mapping from a reduced input data space to the quantitative exchange-parameter space, while preserving perceptual and quantitative content. Results: The GAN-CEST 3D acquisition time was 42-52 seconds, 70% shorter than CEST-MRF. Quantitative reconstruction of the entire brain took 0.8 seconds. An excellent agreement was observed between the ground truth and GAN-based L-arginine concentration and pH values (Pearson's r > 0.97, NRMSE < 1.5%). GAN-CEST images from brain-tumor subjects yielded a semi-solid volume fraction and exchange rate NRMSE of 3.8 $\pm$ 1.3% and 4.6 $\pm$ 1.3%, and SSIM of 96.3 $\pm$ 1.6% and 95.0 $\pm$ 2.4%, respectively. Mapping of the calf-muscle semi-solid exchange parameters yielded NRMSE < 7% and SSIM > 94%. In regions with large susceptibility artifacts, GAN-CEST exhibited improved performance and reduced noise compared to MRF. Conclusion: GAN-CEST can substantially reduce the acquisition time for quantitative semisolid MT/CEST mapping, while retaining performance even for pathologies and scanner models that were unavailable during training.
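For reference, the sketch below computes the two agreement metrics quoted above (NRMSE and SSIM) on a synthetic parameter map; normalizing the RMSE by the ground-truth range is an assumption here, as the paper may use a different convention.

```python
import numpy as np
from skimage.metrics import structural_similarity

def nrmse(gt, pred):
    rmse = np.sqrt(np.mean((gt - pred) ** 2))
    return rmse / (gt.max() - gt.min())     # range-normalized RMSE (one of several conventions)

rng = np.random.default_rng(0)
gt = rng.uniform(0.0, 0.1, size=(64, 64))            # e.g. a semi-solid volume-fraction map
pred = gt + rng.normal(scale=0.002, size=gt.shape)   # stand-in for a GAN-based reconstruction

print(f"NRMSE: {100 * nrmse(gt, pred):.2f}%")
print(f"SSIM : {100 * structural_similarity(gt, pred, data_range=gt.max() - gt.min()):.2f}%")
```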
Priming and antipriming can be modelled with error-driven learning (Marsolek, 2008), by assuming that learning of the prime influences processing of the target stimulus. This implies that participants are continuously learning during priming studies, and predicts that they are also learning during each trial of other psycholinguistic experiments. This study investigates whether trial-to-trial learning can be detected in a lexical decision experiment. We used the Discriminative Lexicon Model (DLM; Baayen et al., 2019), a model of the mental lexicon with meaning representations from distributional semantics, which models incremental learning with the Widrow-Hoff rule. We used data from the British Lexicon Project (BLP; Keuleers et al., 2012) and simulated the lexical decision experiment with the DLM on a trial-by-trial basis for each subject individually. Reaction times for words and nonwords were then predicted using measures derived from the DLM simulations as predictors. The model was developed using data from two subjects and tested on all other subjects. We extracted measures for each subject from two simulations (one with learning updates between trials and one without) and used them as input to two GAMs. The learning-based model showed better model fit than the non-learning model for the majority of subjects. Our measures also provide insights into lexical processing and allowed us to explore individual differences with linear mixed models. This demonstrates the potential of the DLM for modelling behavioural data and leads to the conclusion that trial-to-trial learning can indeed be detected in psycholinguistic experiments.
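The incremental learning rule referred to above is the Widrow-Hoff (delta) rule; the sketch below applies it to a toy form-to-meaning mapping so that every trial updates the associations used on later trials. The cue and outcome vectors are random stand-ins, not representations from the DLM.

```python
import numpy as np

rng = np.random.default_rng(1)
n_cues, n_outcomes = 50, 20
W = np.zeros((n_cues, n_outcomes))       # form-to-meaning association matrix
eta = 0.01                               # learning rate

def widrow_hoff_update(W, cue, outcome, eta):
    # Delta rule: move associations towards reducing the prediction error on this trial.
    prediction = cue @ W
    return W + eta * np.outer(cue, outcome - prediction)

# A small "vocabulary" of 10 words (binary form cues paired with semantic vectors),
# presented repeatedly in random order across 2000 trials.
vocab = [(rng.integers(0, 2, n_cues).astype(float), rng.normal(size=n_outcomes))
         for _ in range(10)]
trials = [vocab[rng.integers(0, 10)] for _ in range(2000)]

errors = []
for cue, outcome in trials:
    errors.append(np.linalg.norm(outcome - cue @ W))   # error before this trial's update
    W = widrow_hoff_update(W, cue, outcome, eta)

print("mean prediction error, first vs last 100 trials:",
      round(np.mean(errors[:100]), 3), round(np.mean(errors[-100:]), 3))
```

Because the matrix is updated after every trial, a word's predicted meaning (and hence any derived reaction-time measure) depends on the order of preceding trials, which is the trial-to-trial learning signal the study tests for.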
Language models demonstrate both quantitative improvements and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing on problems from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to billions of parameters. In addition, a team of expert human raters performed all tasks to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but remain poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; and social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.
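The abstract reports that calibration improves with scale; as a point of reference, the sketch below computes one standard calibration measure, expected calibration error (ECE), on stand-in predictions. BIG-bench's own calibration metrics may differ.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Average |accuracy - confidence| over equal-width confidence bins, weighted by bin size."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - confidences[mask].mean())
    return ece

# Stand-in predictions: an overconfident model answering multiple-choice tasks.
rng = np.random.default_rng(0)
conf = rng.uniform(0.6, 1.0, size=5000)
correct = (rng.uniform(size=5000) < conf - 0.15).astype(float)   # accuracy lags confidence
print(f"ECE: {expected_calibration_error(conf, correct):.3f}")
```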
Context-aware decision support in the operating room can foster surgical safety and efficiency by leveraging real-time feedback from surgical workflow analysis. Most existing works recognize surgical activities at a coarse-grained level, such as phases, steps or events, leaving out fine-grained interaction details about the surgical activity; yet those are needed for more helpful AI assistance in the operating room. Recognizing surgical actions as triplets of <instrument, verb, target> combinations delivers comprehensive details about the activities taking place in surgical videos. This paper presents CholecTriplet2021: an endoscopic vision challenge organized at MICCAI 2021 for the recognition of surgical action triplets in laparoscopic videos. The challenge granted private access to the large-scale CholecT50 dataset, which is annotated with action triplet information. In this paper, we present the challenge setup and assessment of the state-of-the-art deep learning methods proposed by the participants during the challenge. A total of 4 baseline methods from the challenge organizers and 19 new deep learning algorithms by competing teams are presented to recognize surgical action triplets directly from surgical videos, achieving mean average precision (mAP) ranging from 4.2% to 38.1%. This study also analyzes the significance of the results obtained by the presented approaches, provides a thorough methodological comparison between them and an in-depth result analysis, and proposes a novel ensemble method for enhanced recognition. Our analysis shows that surgical workflow analysis is not yet solved, and also highlights interesting directions for future research on fine-grained surgical activity recognition, which is of utmost importance for the development of AI in surgery.
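The headline metric above is mean average precision over triplet classes; the sketch below computes a per-class average precision and averages it on random stand-in predictions. The official challenge evaluation may differ in detail.

```python
import numpy as np
from sklearn.metrics import average_precision_score

rng = np.random.default_rng(0)
n_frames, n_triplets = 1000, 100           # hypothetical numbers of video frames / triplet classes
y_true = (rng.uniform(size=(n_frames, n_triplets)) < 0.05).astype(int)   # multi-label targets
y_score = 0.7 * y_true + 0.3 * rng.uniform(size=y_true.shape)            # imperfect predictions

# Average precision per triplet class, skipping classes with no positive frames.
ap_per_class = [average_precision_score(y_true[:, k], y_score[:, k])
                for k in range(n_triplets) if y_true[:, k].any()]
print(f"mAP over {len(ap_per_class)} triplet classes: {np.mean(ap_per_class):.3f}")
```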
The Cosmology and Astrophysics with MachinE Learning Simulations (CAMELS) project was developed to combine cosmology with astrophysics through thousands of cosmological hydrodynamic simulations and machine learning. CAMELS contains 4,233 cosmological simulations, 2,049 N-body and 2,184 state-of-the-art hydrodynamic simulations, sampling a vast volume in parameter space. In this paper, we present the CAMELS public data release, describing the characteristics of the CAMELS simulations and the various data products generated from them, including halo, subhalo, galaxy, and void catalogues, power spectra, bispectra, Lyman-$\alpha$ spectra, probability distribution functions, halo radial profiles, and X-ray photon lists. We also release catalogues containing billions of galaxies from CAMELS-SAM: a large collection of N-body simulations combined with the Santa Cruz semi-analytic model. We release all the data, comprising more than 350 terabytes and containing 143,922 snapshots, millions of halos, galaxies, and summary statistics. We provide further technical details on how to access, download, read, and process the data at \url{https://camels.readthedocs.io}.
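As a minimal illustration of working with such a release, the sketch below opens one data product with h5py, assuming HDF5 storage; the file name and dataset key are hypothetical placeholders, and the actual layout is documented at the URL above.

```python
import h5py

path = "snapshot_033.hdf5"                       # placeholder file name, not an actual CAMELS path
with h5py.File(path, "r") as f:
    print("top-level groups:", list(f.keys()))   # inspect the structure before assuming any keys
    # Example of reading a dataset once its key is known from the documentation, e.g.:
    # gas_positions = f["PartType0/Coordinates"][:]   # hypothetical key
```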